Corrected Proof of the Result of ‘a Prediction Error Property of the Lasso Estimator and Its Generalization’ by Huang (2003)
Authors
Abstract
The Lasso achieves variance reduction and variable selection by solving an ℓ1-regularized least squares problem. Huang (2003) claims that 'there always exists an interval of regularization parameter values such that the corresponding mean squared prediction error for the Lasso estimator is smaller than for the ordinary least squares estimator'. This result is correct; however, its proof in Huang (2003) is not. This paper presents a corrected proof of the claim, which exposes and exploits some interesting fundamental properties of the Lasso.
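The claim above can be illustrated numerically. The sketch below is not from the paper: it assumes an orthonormal design (so X'X = I and the Lasso reduces to soft-thresholding the OLS estimate, a standard closed-form special case) and uses Monte Carlo simulation to show that for a suitable regularization level the Lasso's mean squared prediction error falls below that of OLS. The design, true coefficients, and noise level are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5
sigma = 1.0

# Orthonormal design via QR: X^T X = I_p, so the Lasso estimate is the
# soft-thresholded OLS estimate, beta_hat(lam) = S(X^T y, lam).
X, _ = np.linalg.qr(rng.standard_normal((n, p)))
beta = np.array([2.0, 0.0, 0.0, 0.5, 0.0])  # sparse truth (illustrative)

def soft(z, lam):
    """Soft-thresholding operator S(z, lam) = sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def mean_mspe(lam, reps=2000):
    """Monte Carlo estimate of E||X beta - X beta_hat(lam)||^2.
    With orthonormal X this equals E||beta - beta_hat(lam)||^2."""
    total = 0.0
    for _ in range(reps):
        y = X @ beta + sigma * rng.standard_normal(n)
        b_ols = X.T @ y                      # OLS under orthonormal design
        total += np.sum((soft(b_ols, lam) - beta) ** 2)
    return total / reps

mspe_ols = mean_mspe(0.0)    # lam = 0 recovers OLS; risk is p * sigma^2
mspe_lasso = mean_mspe(0.5)  # a moderate lam; shrinks the noise coordinates
print(mspe_ols, mspe_lasso)
```

Because three of the five true coefficients are exactly zero, soft-thresholding removes most of the noise in those coordinates at a small bias cost on the nonzero ones, so the Lasso risk is well below the OLS risk of p·σ² for this λ, consistent with the interval asserted in the claim.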